
    GAN-based multiple adjacent brain MRI slice reconstruction for unsupervised Alzheimer's disease diagnosis

    Unsupervised learning can discover various unseen diseases, relying on large-scale unannotated medical images of healthy subjects. Towards this, unsupervised methods reconstruct a single medical image to detect outliers either in the learned feature space or from high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of the accumulation of subtle anatomical anomalies, such as Alzheimer's disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages. Therefore, we propose a two-step method using Generative Adversarial Network-based multiple adjacent brain MRI slice reconstruction to detect AD at various stages: (Reconstruction) Wasserstein loss with gradient penalty + L1 loss, trained on 3 healthy slices to reconstruct the next 3 ones, reconstructs unseen healthy/AD cases; (Diagnosis) average/maximum loss (e.g., L2 loss) per scan discriminates them by comparing the reconstructed and ground-truth images. The results show that we can reliably detect AD at a very early stage with an Area Under the Curve (AUC) of 0.780, while detecting AD at a late stage much more accurately with an AUC of 0.917; since our method is fully unsupervised, it should also discover and flag any anomalies, including rare diseases.

    Comment: 10 pages, 4 figures. Accepted to Lecture Notes in Bioinformatics (LNBI) as a volume in the Springer series.
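
    A minimal sketch of the diagnosis step's per-scan scoring, assuming the reconstructed and ground-truth slices are available as NumPy arrays; the function name, array shapes, and reduction choices are illustrative, not taken from the paper:

        import numpy as np

        def scan_anomaly_score(reconstructed, ground_truth, reduce="max"):
            # Per-slice L2 reconstruction loss; arrays have shape
            # (n_slices, height, width).
            slice_losses = np.mean((reconstructed - ground_truth) ** 2, axis=(1, 2))
            # Aggregate to one score per scan: higher values suggest an
            # anomalous (e.g., AD) scan.
            return slice_losses.max() if reduce == "max" else slice_losses.mean()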

    An edge-driven 3D region growing approach for upper airways morphology and volume evaluation in patients with Pierre Robin sequence

    In this paper, a semi-automatic approach for segmentation of the upper airways is proposed. The implemented approach uses an edge-driven 3D region-growing algorithm to segment the regions of interest (ROIs) and a 3D volume-rendering technique to reconstruct the 3D model of the upper airways. This method can be used to integrate information inside a medical decision support system, making it possible to enhance medical evaluation. The effectiveness of the proposed segmentation approach was evaluated using the Jaccard (92.1733%) and Dice (94.6441%) similarity indices and the specificity (96.8895%) and sensitivity (97.6682%) rates. The proposed method reduced the average computation time by a factor of 16 with respect to manual segmentation.
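
    A simplified sketch of an edge-driven 3D region-growing criterion of this kind, assuming a precomputed edge map (e.g., gradient magnitude); the seed, tolerance, and stopping rule are illustrative and may differ from the authors' implementation:

        import numpy as np
        from collections import deque

        def region_grow_3d(volume, seed, intensity_tol, edge_map, edge_thresh):
            # Grow a region from a seed voxel over the 6-connected
            # neighbourhood, stopping at strong edges.
            grown = np.zeros(volume.shape, dtype=bool)
            grown[seed] = True
            seed_val = float(volume[seed])
            queue = deque([seed])
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in offsets:
                    n = (z + dz, y + dy, x + dx)
                    if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                            and not grown[n]
                            and abs(float(volume[n]) - seed_val) <= intensity_tol
                            and edge_map[n] < edge_thresh):
                        grown[n] = True
                        queue.append(n)
            return grown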

    CT Radiomic Features and Clinical Biomarkers for Predicting Coronary Artery Disease

    This study aimed to investigate the predictive value of radiomic features extracted from the pericoronaric adipose tissue (around the anterior interventricular artery, IVA) in assessing the condition of coronary arteries, compared with the use of clinical characteristics (i.e., risk factors) alone. Clinical and radiomic data of 118 patients were retrospectively analyzed. In total, 93 radiomic features were extracted for each ROI around the IVA, and 13 clinical features were used to build different machine learning models aimed at predicting the impairment (or otherwise) of coronary arteries. Pericoronaric radiomic features improved prediction beyond the use of risk factors alone: with the best model (Random Forest + Mutual Information), the AUROC reached 0.820 ± 0.076. The combined use of both types of features (i.e., radiomic and clinical) improved performance regardless of the feature selection method used. Experimental findings demonstrated that the use of radiomic features alone achieves better performance than the use of clinical features alone, while the combined use of both clinical and radiomic biomarkers further improves the predictive ability of the models. The main contributions of this work are: (i) the implementation of multimodal predictive models, based on both clinical and radiomic features, and (ii) a trusted system to support clinical decision-making processes by means of explainable classifiers and interpretable features.
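
    A hedged sketch of the best-performing configuration (mutual-information feature selection feeding a random forest), with synthetic data standing in for the study's 93 radiomic + 13 clinical features; k, n_estimators, and the cross-validation setup are assumptions:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Stand-in for 118 patients with 93 radiomic + 13 clinical features
        X, y = make_classification(n_samples=118, n_features=106, random_state=0)

        model = make_pipeline(
            SelectKBest(mutual_info_classif, k=20),  # mutual-information selection
            RandomForestClassifier(n_estimators=200, random_state=0),
        )
        auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
        print(f"cross-validated AUROC: {auc:.3f}")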

    A computational study on temperature variations in MRgFUS treatments using PRF thermometry techniques and optical probes

    Structural and metabolic imaging are fundamental for diagnosis, treatment, and follow-up in oncology. Beyond the well-established diagnostic imaging applications, ultrasound is currently emerging in clinical practice as a noninvasive therapeutic technology. Indeed, sound waves can be used to increase the temperature inside target solid tumors, leading to apoptosis or necrosis of neoplastic tissues. Magnetic resonance-guided focused ultrasound surgery (MRgFUS) is a valid application of this ultrasound property, mainly used in oncology and neurology. In this paper, patient safety during MRgFUS treatments was investigated through a series of experiments on a tissue-mimicking phantom and on ex vivo skin samples, to promptly identify unwanted temperature rises. The acquired MR images, used to evaluate the temperature in the treated areas, were analyzed to compare classical proton resonance frequency (PRF) shift techniques and referenceless thermometry methods in accurately assessing temperature variations. We exploited radial basis function (RBF) neural networks for referenceless thermometry and compared the results against measurements from a set of interferometric optical fibers that quantify temperature variations directly in the sonication areas. The temperature increases during treatment were not accurately detected by the MRI-based referenceless thermometry methods, so more sensitive measurement systems, such as optical fibers, would be required. In-depth studies of these aspects are needed to monitor temperature and improve safety during MRgFUS treatments.
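
    For reference, classical PRF-shift thermometry maps a phase difference between the current and baseline images to a temperature change; a minimal sketch, where the field strength and echo time are illustrative assumptions rather than the paper's acquisition settings:

        import numpy as np

        GAMMA = 2 * np.pi * 42.58e6  # proton gyromagnetic ratio [rad/s/T]
        ALPHA = -0.01e-6             # PRF thermal coefficient, approx. -0.01 ppm/degC
        B0 = 1.5                     # field strength [T] (assumed)
        TE = 10e-3                   # echo time [s] (assumed)

        def prf_delta_t(phase, phase_ref):
            # Temperature-change map [degC] from phase images [rad]:
            # delta_T = delta_phi / (gamma * alpha * B0 * TE)
            return (phase - phase_ref) / (GAMMA * ALPHA * B0 * TE)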

    Calibrating the Dice loss to handle neural network overconfidence for biomedical image segmentation

    The Dice similarity coefficient (DSC) is both a widely used metric and loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well-calibrated outputs enable tailoring of the recall-precision bias, an important post-processing technique to adapt the model predictions to suit the biomedical or clinical task. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus
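
    As background, a minimal PyTorch sketch of the conventional soft DSC loss that the DSC++ loss extends; the DSC++ modulation of the false-positive and false-negative terms is deliberately not reproduced here (see the linked repository for the exact form):

        import torch

        def soft_dice_loss(probs, targets, eps=1e-7):
            # probs, targets: tensors of shape (batch, classes, *spatial),
            # with probs in [0, 1] (e.g., after softmax).
            dims = tuple(range(2, probs.ndim))
            tp = (probs * targets).sum(dims)
            fp = (probs * (1 - targets)).sum(dims)
            fn = ((1 - probs) * targets).sum(dims)
            dsc = (2 * tp + eps) / (2 * tp + fp + fn + eps)
            return 1 - dsc.mean()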

    Robustness Analysis of DCE-MRI-Derived Radiomic Features in Breast Masses: Assessing Quantization Levels and Segmentation Agreement

    Featured Application: The use of highly robust radiomic features is fundamental to reduce intrinsic dependencies and to provide reliable predictive models. This work presents a study on breast tumor DCE-MRI, considering the robustness of radiomic features against the quantization settings and segmentation methods.

    Machine learning models based on radiomic features allow us to obtain biomarkers that are capable of modeling the disease and supporting the clinical routine. Recent studies have shown that it is fundamental that the computed features are robust and reproducible. Although several initiatives to standardize the definition and extraction of biomarkers are ongoing, there is a lack of comprehensive guidelines; therefore, no standardized procedures are available for ROI selection, feature extraction, and processing, with the risk of undermining the effective use of radiomic models in clinical routine. In this study, we assess the impact that different segmentation methods and quantization levels (defined by means of the number of bins used in the feature-extraction phase) may have on the robustness of radiomic features. In particular, the robustness of texture features extracted by PyRadiomics, belonging to five categories (GLCM, GLRLM, GLSZM, GLDM, and NGTDM), was evaluated using the intra-class correlation coefficient (ICC) and mean differences between segmentation raters. In addition to the robustness of each single feature, an overall index for each feature category was quantified. The analysis showed that the quantization level (i.e., the 'binCount' parameter) plays a key role in defining robust features: in our study, focused on a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) dataset of 111 breast masses, sets of 34 and 43 robust features were obtained with 'binCount' values equal to 256 and 32, respectively. Moreover, both manual segmentation methods demonstrated good reliability and agreement, while automated segmentation achieved lower ICC values. Considering the dependence on the quantization level, taking only the intersection subset across all values of 'binCount' could be the best selection strategy. Among the radiomic feature categories, GLCM, GLRLM, and GLDM showed the best overall robustness across segmentation methods.
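
    A sketch of how the five texture-feature categories might be extracted at several quantization levels with PyRadiomics; file paths are placeholders, and the ICC computation across segmentation raters is omitted:

        from radiomics import featureextractor

        results = {}
        for bin_count in (8, 16, 32, 64, 128, 256):
            extractor = featureextractor.RadiomicsFeatureExtractor(binCount=bin_count)
            extractor.disableAllFeatures()
            for category in ("glcm", "glrlm", "glszm", "gldm", "ngtdm"):
                extractor.enableFeatureClassByName(category)
            # Placeholder paths for one DCE-MRI image and its ROI mask
            results[bin_count] = extractor.execute("breast_dce.nrrd", "mask.nrrd")
        # Features whose values remain stable (e.g., high ICC) across bin
        # counts form the robust intersection subset described above.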

    A multimodal retina-iris biometric system using the Levenshtein distance for spatial feature comparison

    The recent developments of information technologies, and the consequent need for access to distributed services and resources, require robust and reliable authentication systems. Biometric systems can guarantee high levels of security, and multimodal techniques, which combine two or more biometric traits, enforce more stringent constraints during the access phases. This work proposes a novel multimodal biometric system based on the combination of iris and retina in the spatial domain. The proposed solution follows the alignment-and-recognition approach commonly adopted in computational linguistics and bioinformatics; in particular, features are extracted separately for iris and retina, and the fusion is obtained by comparing scores via the Levenshtein distance. We evaluated our approach by testing several combinations of publicly available biometric databases, namely one for retina images and three for iris images. To provide comprehensive results, detection error trade-off-based metrics, as well as statistical analyses for assessing the authentication performance, were considered. The best achieved False Acceptance Rate and False Rejection Rate indices were and 3.33%, respectively, for the multimodal retina-iris biometric approach, which overall outperformed the unimodal systems. These results highlight the potential of the proposed approach as a multimodal authentication framework using multiple static biometric traits.
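
    A self-contained sketch of the Levenshtein distance used for the comparison score; how the iris and retina features are serialized into strings is specific to the paper and not reproduced, so the inputs below are illustrative:

        def levenshtein(a, b):
            # Dynamic programming over two rolling rows; returns the minimum
            # number of insertions, deletions, and substitutions.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    curr.append(min(prev[j] + 1,                # deletion
                                    curr[j - 1] + 1,            # insertion
                                    prev[j - 1] + (ca != cb)))  # substitution
                prev = curr
            return prev[-1]

        # Illustrative fusion idea: compare enrolled vs. probe feature strings
        # per trait, then combine the normalized distances into one score.
        print(levenshtein("retina-probe-string", "retina-enrolled-string"))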

    Biochemical parameter estimation vs. benchmark functions: A comparative study of optimization performance and representation design

    Computational Intelligence methods, which include Evolutionary Computation and Swarm Intelligence, can efficiently and effectively identify optimal solutions to complex optimization problems by exploiting the cooperative and competitive interplay among their individuals. The exploration and exploitation capabilities of these meta-heuristics are typically assessed on well-known suites of benchmark functions, specifically designed for numerical global optimization. However, their performance can change drastically on real-world optimization problems. In this paper, we investigate this issue by considering the Parameter Estimation (PE) of biochemical systems, a common computational problem in Systems Biology. To evaluate the effectiveness of various meta-heuristics in solving the PE problem, we compare their performance on a set of benchmark functions and on a set of synthetic biochemical models characterized by search spaces of increasing dimensionality. Our results show that some state-of-the-art optimization methods, able to largely outperform the other meta-heuristics on benchmark functions, perform considerably poorly when applied to the PE problem. We also show that a limiting factor of these methods concerns the representation of the solutions: by means of a simple semantic transformation, it is possible to turn these algorithms into competitive alternatives. We corroborate this finding by performing the PE of a model of metabolic pathways in red blood cells. Overall, we show that classic benchmark functions cannot be fully representative of all the features that make real-world optimization problems, such as the PE of biochemical systems, hard to solve. Optimization problems must therefore be carefully analyzed to select an appropriate representation, in order to actually obtain the performance promised by benchmark results.
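
    A minimal illustration of the kind of semantic transformation discussed above: searching in log10 space so that kinetic parameters spanning several orders of magnitude are explored uniformly. The objective below is a toy surrogate, not one of the paper's biochemical models, and differential evolution stands in for the meta-heuristics compared in the study:

        import numpy as np
        from scipy.optimize import differential_evolution

        TRUE_RATES = np.array([1e-3, 1e1, 1e4])  # toy kinetic constants, 7 decades apart

        def objective(params):
            # Surrogate for the distance between simulated and observed dynamics
            return np.sum(((params - TRUE_RATES) / TRUE_RATES) ** 2)

        def objective_log_space(log_params):
            # Decode the log10 representation back to linear rate constants
            return objective(10.0 ** log_params)

        # Each parameter is searched as its base-10 exponent in [-6, 6]
        result = differential_evolution(objective_log_space, bounds=[(-6, 6)] * 3, seed=0)
        print("estimated rates:", 10.0 ** result.x)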